Pre-trained language models for programming languages have shown powerful capabilities on many Software Engineering (SE) tasks, e.g., program synthesis, code completion, and code search. However, what lies behind their success remains unclear. Recent studies have examined how pre-trained models learn syntax information from Abstract Syntax Trees (ASTs). In this paper, we investigate what role the self-attention mechanism plays in understanding code syntax and semantics, based on ASTs and static analysis. We focus on a well-known representative code model, CodeBERT, and study how it learns code syntax and semantics through self-attention and Masked Language Modelling (MLM) at the token level. We propose a group of probing tasks to analyze CodeBERT, establishing relationships among code tokens from ASTs and static analysis. First, our results show that CodeBERT can acquire syntax and semantics knowledge through self-attention and MLM. Second, we demonstrate that the self-attention mechanism pays more attention to tokens in dependence relationships than to other tokens. Different attention heads play different roles in learning code semantics, and we show that some of them are weak at encoding it. Different layers also have different competencies in representing different code properties: deep CodeBERT layers can encode semantic information that requires complex inference over the code context. More importantly, we show that our analysis is actionable and leverage our conclusions to improve CodeBERT. We present an alternative approach for pre-training models that makes full use of the current pre-training strategy, i.e., MLM, to learn code syntax and semantics, instead of combining features from different code data formats, e.g., data flow, run-time states, and program outputs.
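To make the probing setup concrete, here is a minimal sketch of a token-level attention probe in the spirit described above. The model and tokenizer are the public CodeBERT checkpoint; the dependence-related token pairs are hard-coded here for illustration, whereas in practice they would be derived from an AST / static-analysis pass (an assumption, not the paper's exact procedure).

```python
# Sketch: does CodeBERT's self-attention weight dependence-related token
# pairs more heavily than average? Token indices below are illustrative.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base", output_attentions=True)

code = "def add(a, b): return a + b"
inputs = tokenizer(code, return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs).attentions  # tuple: layers x [1, heads, seq, seq]

# Hypothetical def-use pairs: the parameters `a`, `b` and their uses in the
# return expression; real pairs would come from AST/static-analysis alignment.
dep_pairs = [(4, 9), (6, 11)]

for layer, att in enumerate(attentions):
    att = att[0]                        # [heads, seq, seq]
    dep_mass = torch.stack([att[:, i, j] for i, j in dep_pairs]).mean(dim=0)
    base_mass = att.mean(dim=(1, 2))    # average attention per head as baseline
    print(f"layer {layer}: dep/base ratio per head =", (dep_mass / base_mass).tolist())
```

Heads and layers whose ratio stays near 1 would count as "weak at encoding semantics" in the sense discussed above.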
Real-world image super-resolution (RISR) has received increasing attention for improving the quality of SR images under unknown, complex degradations. Existing methods rely on heavy SR models to enhance low-resolution (LR) images of different degradation levels, which significantly restricts their practical deployment on resource-limited devices. In this paper, we propose a novel Dynamic Channel Splitting scheme for efficient Real-world Image Super-Resolution, termed DCS-RISR. Specifically, we first introduce a lightweight degradation prediction network to regress a degradation vector that models real-world degradations, from which a channel splitting vector is generated as input to an efficient SR model. Then, a learnable octave convolution block is proposed to adaptively decide the channel splitting scale for low- and high-frequency features at each block, reducing computation overhead and memory cost by assigning a large scale to low-frequency features and a small scale to high-frequency ones. To further improve RISR performance, non-local regularization is employed to supplement the knowledge of patches from the LR and HR subspaces at no extra inference cost. Extensive experiments demonstrate the effectiveness of DCS-RISR on different benchmark datasets. Our DCS-RISR not only achieves the best trade-off between computation/parameters and PSNR/SSIM metrics, but also effectively handles real-world images with different degradation levels.
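The following is a rough sketch of the degradation-conditioned, octave-style channel splitting idea. The actual DCS-RISR block design is not reproduced here; the split ratio, layer sizes, and the gating by the degradation vector are all illustrative assumptions.

```python
# Sketch: split channels into a low-frequency branch (processed at half
# resolution, hence cheaper) and a high-frequency branch, gated by a
# predicted degradation vector. A soft stand-in for the learnable splitting.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctaveStyleBlock(nn.Module):
    def __init__(self, channels, deg_dim, low_ratio=0.5):
        super().__init__()
        self.c_low = int(channels * low_ratio)
        self.c_high = channels - self.c_low
        self.low_conv = nn.Conv2d(self.c_low, self.c_low, 3, padding=1)
        self.high_conv = nn.Conv2d(self.c_high, self.c_high, 3, padding=1)
        # Degradation vector gates the two branches (hypothetical mechanism).
        self.gate = nn.Sequential(nn.Linear(deg_dim, 2), nn.Sigmoid())

    def forward(self, x, deg_vec):
        low, high = x[:, :self.c_low], x[:, self.c_low:]
        g_low, g_high = self.gate(deg_vec).unbind(dim=-1)
        # Low-frequency path runs at half spatial resolution to save FLOPs.
        low = F.interpolate(low, scale_factor=0.5, mode="bilinear", align_corners=False)
        low = self.low_conv(low)
        low = F.interpolate(low, size=high.shape[-2:], mode="bilinear", align_corners=False)
        low = low * g_low.view(-1, 1, 1, 1)
        high = self.high_conv(high) * g_high.view(-1, 1, 1, 1)
        return torch.cat([low, high], dim=1)

block = OctaveStyleBlock(channels=32, deg_dim=8)
out = block(torch.randn(2, 32, 48, 48), torch.randn(2, 8))
print(out.shape)  # torch.Size([2, 32, 48, 48])
```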
The deep learning community has witnessed exponentially growing interest in self-supervised learning (SSL). However, how to build a framework for learning useful representations of raw music waveforms in a self-supervised manner remains unexplored. In this work, we design Music2Vec, a framework exploring different SSL algorithmic components and tricks for music audio recordings. Our model achieves results comparable to the state-of-the-art (SOTA) music SSL model Jukebox, despite being significantly smaller, with less than 2% of the latter's parameters. The model will be released on Huggingface (see https://huggingface.co/m-a-p/music2vec-v1).
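A minimal sketch of extracting frame-level representations from the released checkpoint, assuming it exposes a standard `transformers` audio-model interface; the exact model class resolved by `AutoModel` and the expected input normalization are assumptions, not documented details from the abstract.

```python
# Sketch: load the released Music2Vec checkpoint and extract representations.
# Real audio should be resampled/normalized per the repo's feature extractor.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("m-a-p/music2vec-v1")
model.eval()

wav = torch.randn(1, 16000 * 5)  # stand-in for 5 s of 16 kHz mono audio
with torch.no_grad():
    hidden = model(input_values=wav).last_hidden_state  # [1, frames, dim]
print(hidden.shape)  # frame-level features usable for downstream music tasks
```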
The recent success of vision transformers has inspired a series of vision backbones with novel feature transformation paradigms, which report steady performance gains. Although the novel feature transformation designs are often claimed as the source of the gains, some backbones may benefit from advanced engineering techniques, which makes it hard to identify the real gain from the key feature transformation operators. In this paper, we aim to identify the real gains of popular convolution and attention operators and make an in-depth study of them. We observe that the main difference among these feature transformation modules, e.g., attention or convolution, lies in the way of spatial feature aggregation, the so-called "spatial token mixer" (STM). Hence, we first elaborate a unified architecture to eliminate the unfair impact of different engineering techniques, and then fit STMs into this architecture for comparison. Based on various experiments on upstream/downstream tasks and an analysis of inductive bias, we find that the engineering techniques boost performance significantly, but a performance gap still exists among different STMs. The detailed analysis also reveals some interesting findings about different STMs, such as effective receptive fields and invariance tests. The code and trained models will be publicly available at https://github.com/OpenGVLab/STM-Evaluation
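The "unified architecture with a pluggable STM" idea can be sketched as follows: everything except the mixer (norms, MLP, residuals) is shared, so swapping the mixer isolates its contribution. Block details here are assumptions, not taken from the released code.

```python
# Sketch: a shared block skeleton with interchangeable spatial token mixers.
import torch
import torch.nn as nn

class DepthwiseMixer(nn.Module):          # convolution-style STM
    def __init__(self, dim, h, w):
        super().__init__()
        self.h, self.w = h, w
        self.conv = nn.Conv2d(dim, dim, 7, padding=3, groups=dim)

    def forward(self, x):                 # x: [B, N, C] with N == h * w
        b, n, c = x.shape
        x = x.transpose(1, 2).reshape(b, c, self.h, self.w)
        return self.conv(x).flatten(2).transpose(1, 2)

class AttentionMixer(nn.Module):          # attention-style STM
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        return self.attn(x, x, x, need_weights=False)[0]

class UnifiedBlock(nn.Module):
    def __init__(self, dim, mixer):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mixer = mixer                # the only part that varies
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        x = x + self.mixer(self.norm1(x))
        return x + self.mlp(self.norm2(x))

tokens = torch.randn(2, 14 * 14, 64)
for mixer in (DepthwiseMixer(64, 14, 14), AttentionMixer(64)):
    print(type(mixer).__name__, UnifiedBlock(64, mixer)(tokens).shape)
```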
The application of superconducting materials is becoming increasingly widespread. Traditionally, the discovery of new superconducting materials relies on the experience of experts and a large number of trial-and-error experiments, which not only increases the cost of experimentation but also prolongs the period of discovering new superconducting materials. In recent years, machine learning has been increasingly applied to materials science. On this basis, this manuscript proposes the use of an XGBoost model to identify superconductors; the first application of a deep forest model to predict the critical temperature of superconductors; the first application of deep forest to predict the band gap of materials; and the application of a new sub-network model to predict the Fermi energy level of materials. Compared with similar work known to us, all the above algorithms achieve state-of-the-art performance. Finally, this manuscript uses the above models to search the COD public dataset and identifies 50 candidate superconducting materials with possible critical temperatures greater than 90 K.
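A minimal sketch of the superconductor-identification step with XGBoost follows. The feature matrix (e.g., composition-derived descriptors) and labels are synthetic placeholders, and the hyperparameters are illustrative rather than the manuscript's.

```python
# Sketch: binary classification of superconductors with XGBoost.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 81))      # stand-in for per-material descriptors
y = rng.integers(0, 2, size=1000)    # 1 = superconductor, 0 = not

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

The same pattern, with a regressor in place of the classifier, would apply to the critical-temperature, band-gap, and Fermi-level prediction tasks.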
We consider stochastic gradient descent (SGD) algorithms driven by general stochastic sequences, including i.i.d. noise and random walks on arbitrary graphs, among others, and analyze them in an asymptotic sense. Specifically, we employ the notion of "efficiency ordering", a well-studied tool for comparing the performance of Markov Chain Monte Carlo (MCMC) samplers, for SGD algorithms in terms of the Loewner ordering of covariance matrices associated with the scaled iterate errors in the long run. Using this ordering, we show that input sequences that are more efficient for MCMC sampling also lead to smaller covariance of the errors of SGD algorithms in the limit. This also implies that the MSE of arbitrarily weighted averages of SGD iterates becomes smaller when driven by more efficient chains. Our findings are of particular interest in applications such as decentralized optimization and swarm learning, where SGD is implemented in a random-walk fashion on the underlying communication graph for cost reasons and/or data privacy. We demonstrate how certain non-Markovian processes, which are intractable under typical non-asymptotic bounds based on mixing times, can outperform their Markovian counterparts in the sense of efficiency ordering for SGD. We show the utility of our method by applying it to gradient descent with shuffling and to mini-batch gradient descent, reaffirming key results from the existing literature under a unified framework. Empirically, we also observe efficiency ordering for variants of SGD such as accelerated SGD and Adam, opening up the possibility of extending our notion of efficiency ordering to a broader class of stochastic optimization algorithms.
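A toy sketch of the comparison discussed above: SGD on a least-squares objective where the data index at each step is drawn either i.i.d. or via a simple random walk on a ring graph (both have the uniform stationary distribution, but the ring walk mixes much more slowly). The topology, step size, and horizon are illustrative only.

```python
# Sketch: long-run SGD error under i.i.d. sampling vs. a slow-mixing random walk.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.5 * rng.normal(size=n)  # noisy targets
x_hat = np.linalg.lstsq(A, b, rcond=None)[0]           # empirical minimizer

def run_sgd(next_index, steps=100_000, lr=1e-3):
    x, i, errs = np.zeros(d), 0, []
    for _ in range(steps):
        i = next_index(i)
        g = (A[i] @ x - b[i]) * A[i]   # gradient of 0.5 * (a_i^T x - b_i)^2
        x -= lr * g
        errs.append(np.sum((x - x_hat) ** 2))
    return np.mean(errs[steps // 2:])  # long-run mean squared error

iid_sampler = lambda i: int(rng.integers(n))
ring_walk = lambda i: (i + int(rng.choice([-1, 1]))) % n  # random walk on a ring

print("i.i.d. sampling  long-run MSE:", run_sgd(iid_sampler))
print("ring random walk long-run MSE:", run_sgd(ring_walk))
```

The more efficient (faster-mixing) input sequence should yield the smaller long-run error, consistent with the ordering result.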
Applying artificial intelligence to scientific problems (namely, AI for science) is currently under debate. However, scientific problems differ greatly from traditional ones with images, texts, and so on, where new challenges emerge from imbalanced scientific data and the complicated effects of physical setups. In this work, we demonstrate the effectiveness of deep convolutional neural networks (CNNs) in reconstructing lattice topology (i.e., spin connectivities) in the presence of strong thermal fluctuations and unbalanced data. Taking the kinetic Ising model with Glauber dynamics as an example, the CNN maps the time-dependent local magnetic momenta (a single-node feature) evolved from a specific initial configuration (dubbed an evolution instance) to the probabilities of the presence of possible couplings. Our scheme is distinct from previous ones that may require knowledge of the node dynamics, the responses to perturbations, or the evaluation of statistical quantities (such as correlations or transfer entropy) over many evolution instances. Fine-tuning avoids the "barren plateau" caused by strong thermal fluctuations at high temperatures. Accurate reconstructions can be made when the thermal fluctuations dominate over the correlations, where statistical methods generally fail. Meanwhile, we reveal the generalization of the CNN to instances evolved from unlearned initial spin configurations and instances with unlearned lattices. We raise an open question on learning with unbalanced data in a nearly "double-exponentially" large sample space.
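A schematic sketch of the mapping described above: a CNN takes the local magnetic momenta of N spins over T time steps (one "evolution instance", shaped like a single-channel image) and outputs a probability per candidate coupling. Sizes and architecture are illustrative assumptions.

```python
# Sketch: evolution instance [T, N] -> probability for each of the
# N*(N-1)/2 candidate couplings, trained with binary cross-entropy.
import torch
import torch.nn as nn

N, T = 16, 64
n_edges = N * (N - 1) // 2            # candidate couplings on N nodes

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4),
    nn.Flatten(),
    nn.Linear(32 * 4 * 4, n_edges),
    nn.Sigmoid(),                     # probability that each coupling exists
)

instances = torch.randn(8, 1, T, N)   # batch of evolution instances
edge_probs = model(instances)         # [8, n_edges]
targets = torch.randint(0, 2, (8, n_edges)).float()  # placeholder adjacency
loss = nn.functional.binary_cross_entropy(edge_probs, targets)
print(edge_probs.shape, loss.item())
```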
A recent trend in artificial intelligence is the use of pretrained models for language and vision tasks, which have achieved extraordinary performance but also puzzling failures. Probing these models' abilities in diverse ways is therefore critical to the field. In this paper, we explore the reliability of models, where we define a reliable model as one that not only achieves strong predictive performance but also performs well consistently over many decision-making tasks involving uncertainty (e.g., selective prediction, open set recognition), robust generalization (e.g., accuracy and proper scoring rules such as log-likelihood on in- and out-of-distribution datasets), and adaptation (e.g., active learning, few-shot uncertainty). We devise 10 types of tasks over 40 datasets in order to evaluate different aspects of reliability on both vision and language domains. To improve reliability, we develop ViT-Plex and T5-Plex, pretrained large-model extensions for vision and language modalities, respectively. Plex greatly improves the state of the art across reliability tasks and simplifies the traditional protocol, as it improves out-of-the-box performance and does not require designing scores or tuning the model for each task. We demonstrate scaling effects over model sizes up to 1B parameters and pretraining dataset sizes up to 4B examples. We also demonstrate Plex's capabilities on challenging tasks including zero-shot open set recognition, active learning, and uncertainty in conversational language understanding.
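One of the reliability tasks mentioned above, selective prediction, can be evaluated with a few lines: rank test examples by the model's confidence and measure accuracy at each coverage level. The predictions below are synthetic stand-ins, not model outputs.

```python
# Sketch: selective-prediction evaluation (accuracy vs. coverage).
import numpy as np

rng = np.random.default_rng(0)
confidence = rng.uniform(size=2000)
correct = rng.uniform(size=2000) < confidence   # toy, better-than-chance calibration

order = np.argsort(-confidence)                 # most confident first
for coverage in (0.2, 0.5, 0.8, 1.0):
    k = int(coverage * len(order))
    acc = correct[order[:k]].mean()
    print(f"coverage {coverage:.0%}: selective accuracy {acc:.3f}")
```

A reliable model in the paper's sense should show accuracy rising as coverage shrinks, i.e., its confidence usefully identifies the examples it gets right.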
Prompt tuning attempts to update only a few task-specific parameters in pre-trained models. It has achieved performance comparable to fine-tuning of the full parameter set on language understanding and generation tasks. In this work, we study the problem of prompt tuning for neural text retrievers. We introduce parameter-efficient prompt tuning for text retrieval across in-domain, cross-domain, and cross-topic settings. Through extensive analysis, we show that the strategy can mitigate the two issues of fine-tuning-based retrieval methods: parameter inefficiency and weak generalizability. Notably, it can significantly improve the zero-shot generalization of retrieval models. By updating only 0.1% of the model parameters, the prompt tuning strategy can help retrieval models achieve better generalization performance than traditional methods in which all parameters are updated. Finally, to facilitate research on retrievers' cross-topic generalizability, we curate and release an academic retrieval dataset with 87 topics and 18K queries, making it the largest topic-specific retrieval dataset to date.
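A condensed sketch of parameter-efficient prompt tuning for a dual-encoder retriever: the pretrained encoder is frozen and only a handful of soft prompt vectors, prepended to the input embeddings, are trained. The backbone choice (BERT), prompt length, and pooling are assumptions for illustration, not the paper's exact recipe.

```python
# Sketch: soft prompt tuning with a frozen encoder (position handling simplified).
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
for p in encoder.parameters():
    p.requires_grad = False                      # backbone stays frozen

n_prompt, dim = 16, encoder.config.hidden_size
prompt = nn.Parameter(torch.randn(n_prompt, dim) * 0.02)  # the only trained weights

def encode(texts):
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    tok_emb = encoder.embeddings(input_ids=batch["input_ids"])
    inputs_embeds = torch.cat([prompt.expand(len(texts), -1, -1), tok_emb], dim=1)
    ones = torch.ones(len(texts), n_prompt, dtype=batch["attention_mask"].dtype)
    mask = torch.cat([ones, batch["attention_mask"]], dim=1)
    out = encoder(inputs_embeds=inputs_embeds, attention_mask=mask)
    return out.last_hidden_state[:, 0]           # first position as the embedding

q = encode(["what is prompt tuning"])
d = encode(["prompt tuning trains soft prompts"])
print(torch.cosine_similarity(q, d))  # train with a contrastive loss on such scores
```

Only `prompt` (16 x 768 values here, well under 0.1% of BERT's parameters) receives gradients, matching the parameter budget described above.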
Low-order functional ANOVA (fANOVA) models have been rediscovered in the machine learning (ML) community under the guise of inherently interpretable machine learning. The Explainable Boosting Machine, or EBM (Lou et al., 2013), and GAMI-Net (Yang et al., 2021) are two recently proposed ML algorithms for fitting functional main effects and second-order interactions. We propose a new algorithm, called GAMI-Tree, that is similar to EBM but has a number of features that lead to better performance. It uses model-based trees as base learners and incorporates a new interaction filtering method that better captures the underlying interactions. In addition, our iterative training method converges to a model with better predictive performance, and the embedded purification ensures that interactions are hierarchically orthogonal to main effects. The algorithm does not require extensive tuning, and our implementation is fast and efficient. We use simulated and real datasets to compare the performance and interpretability of GAMI-Tree with EBM and GAMI-Net.
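A bare-bones illustration of the functional-ANOVA structure these methods fit: shallow trees boosted on one feature at a time (main effects), then on feature pairs (second-order interactions). This mimics the generic EBM-style setup only; it does not implement GAMI-Tree's model-based trees, interaction filtering, or purification.

```python
# Sketch: cyclic boosting over fANOVA terms f(x) = f0 + sum_i f_i + sum_ij f_ij.
import numpy as np
from itertools import combinations
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + X[:, 0] * X[:, 2] + 0.1 * rng.normal(size=2000)

lr, rounds = 0.2, 50
terms = [(i,) for i in range(X.shape[1])] + list(combinations(range(X.shape[1]), 2))
models = {t: [] for t in terms}          # per-term additive components
resid = y - y.mean()

for _ in range(rounds):
    for t in terms:                      # round-robin over fANOVA terms
        tree = DecisionTreeRegressor(max_depth=2)
        tree.fit(X[:, t], resid)         # each tree sees only its term's features
        resid -= lr * tree.predict(X[:, t])
        models[t].append(tree)

print("final training RMSE:", np.sqrt(np.mean(resid ** 2)))
```

Because each tree is restricted to a single feature or pair, every fitted component can be plotted directly, which is the source of the interpretability claimed for this model class.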